
Conversation

ammario commented on Oct 7, 2025

Problem

After merging PR #59, the OpenAI reasoning error still occurred:

Item 'rs_01e84f5b161f6bce0068e480e7778481a3a2c3b6234d6bb7c6' of type 'reasoning' was provided without its required following item.

Root Cause Analysis

The issue was OpenAI-specific. Anthropic reasoning and OpenAI reasoning work differently:

Anthropic

  • Uses text-based reasoning parts in message content
  • These SHOULD be sent back to the API (via sendReasoning: true option, which defaults to true)
  • The model uses historical reasoning context to improve responses

OpenAI

  • Uses encrypted reasoning items with IDs (e.g., rs_*)
  • These are managed automatically via previous_response_id
  • Sending Anthropic-style text reasoning parts creates orphaned reasoning items that cause API errors
  • Per OpenAI docs: "In typical multi-turn conversations, you don't need to include reasoning items or tokens—the model is trained to produce the best output without them"

Solution

Strip reasoning parts ONLY for OpenAI, before converting CmuxMessages to ModelMessages.

  • Added stripReasoningForOpenAI() function for OpenAI-specific processing
  • Apply it conditionally based on provider in aiService.ts
  • Keeps Anthropic behavior intact (reasoning sent via sendReasoning)
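A minimal sketch of what stripReasoningForOpenAI() could look like. The CmuxMessage shape below is assumed for illustration only; the real type and helper live in modelMessageTransform.ts:

```typescript
// Sketch only: assumes a UIMessage-like CmuxMessage with a `parts` array.
// The real type lives in cmux's modelMessageTransform.ts.
interface CmuxPart {
  type: string; // e.g. "text" | "reasoning" | "tool-call"
  [key: string]: unknown;
}

interface CmuxMessage {
  role: "system" | "user" | "assistant";
  parts: CmuxPart[];
}

// Drop reasoning parts from assistant messages so Anthropic-style text
// reasoning never reaches OpenAI's Responses API.
export function stripReasoningForOpenAI(messages: CmuxMessage[]): CmuxMessage[] {
  return messages
    .map((msg) =>
      msg.role === "assistant"
        ? { ...msg, parts: msg.parts.filter((p) => p.type !== "reasoning") }
        : msg
    )
    // Assistant messages that were reasoning-only are now empty; drop them.
    .filter((msg) => msg.role !== "assistant" || msg.parts.length > 0);
}
```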

Changes

  1. New function: stripReasoningForOpenAI() in modelMessageTransform.ts
  2. Updated aiService.ts: Apply reasoning stripping only for OpenAI provider
  3. Reverted previous change: filterEmptyAssistantMessages() no longer strips all reasoning
  4. Added detailed comments explaining provider-specific differences
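The conditional application in change 2 might be wired roughly like this (buildModelMessages is an illustrative helper name, not the real one in aiService.ts, and the cast assumes CmuxMessage is compatible with the SDK's UIMessage):

```typescript
import { convertToModelMessages, type ModelMessage, type UIMessage } from "ai";

// Illustrative wiring; the real provider check in aiService.ts may differ.
function buildModelMessages(provider: string, messages: CmuxMessage[]): ModelMessage[] {
  const history =
    provider === "openai"
      ? stripReasoningForOpenAI(messages) // OpenAI: drop text reasoning parts
      : messages; // Anthropic: keep reasoning (sent via sendReasoning)
  return convertToModelMessages(history as unknown as UIMessage[]);
}
```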

This ensures:

  • ✅ OpenAI doesn't get orphaned reasoning items
  • ✅ Anthropic still receives reasoning context
  • ✅ Provider-specific behavior is clearly documented

ammario force-pushed the tokens branch 3 times, most recently from 9990c1a to 1d093b6 on October 7, 2025 at 03:02
The previous fix added 'reasoning.encrypted_content' to the include option,
but the root cause was that reasoning parts from history were being sent
back to OpenAI's Responses API.

When reasoning parts are included in messages sent to OpenAI, the SDK creates
separate reasoning items with IDs (e.g., rs_*). These orphaned reasoning items
cause errors: 'Item rs_* of type reasoning was provided without its required
following item.'

Initial solution: strip reasoning parts from CmuxMessages BEFORE converting to
ModelMessages, treating reasoning content as display/debugging-only. This
happened in filterEmptyAssistantMessages(), which runs before
convertToModelMessages(), ensuring reasoning parts never reached the API.

That approach was too broad: per Anthropic documentation, reasoning content
SHOULD be sent back to Anthropic models via the sendReasoning option
(defaults to true).

However, OpenAI's Responses API uses encrypted reasoning items (IDs like rs_*)
that are managed automatically via previous_response_id. Anthropic-style
text-based reasoning parts sent to OpenAI create orphaned reasoning items
that cause 'reasoning without following item' errors.

Changes:
- Reverted filterEmptyAssistantMessages() to only filter reasoning-only messages
- Added new stripReasoningForOpenAI() function for OpenAI-specific stripping
- Apply reasoning stripping only for OpenAI provider in aiService.ts
- Added detailed comments explaining the provider-specific differences
ammario merged commit a35b2c2 into main on Oct 7, 2025
6 checks passed
ammario deleted the tokens branch on October 7, 2025 at 03:26
ammario added a commit that referenced this pull request Oct 7, 2025
## Problem

Users are experiencing intermittent OpenAI API errors when using
reasoning models with tool calls:
- `Item 'rs_*' of type 'reasoning' was provided without its required
following item`
- `referenced reasoning on a function_call was not provided`

The previous fix (PR #61) stripped reasoning parts entirely, but this
caused new errors and was too aggressive.

## Root Cause

OpenAI's Responses API uses encrypted reasoning items (IDs like `rs_*`)
that are managed automatically via `previous_response_id`. When provider
metadata from stored history is sent back to OpenAI, it references
reasoning items that no longer exist in the current context, causing API
errors.

## Solution

Instead of stripping reasoning content, we now **blank out provider
metadata** on all content parts for OpenAI:
- Clear `providerMetadata` on text and reasoning parts  
- Clear `callProviderMetadata` on tool-call parts

This preserves the reasoning content (which is useful for debugging and
context) while preventing stale metadata references from causing errors.
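A sketch of the metadata-blanking approach, using loose structural types instead of the SDK's exact `ModelMessage` typing:

```typescript
// Loose structural types for the sketch; the real function operates on
// the AI SDK's ModelMessage[].
type AnyPart = { type: string } & Record<string, unknown>;
type AnyMessage = { role: string; content: string | AnyPart[] };

export function clearProviderMetadataForOpenAI(messages: AnyMessage[]): AnyMessage[] {
  return messages.map((msg) => {
    if (typeof msg.content === "string") return msg;
    return {
      ...msg,
      content: msg.content.map((part) => {
        // Keep the part itself (including reasoning text), but drop the
        // stale OpenAI item references carried in the metadata fields.
        const { providerMetadata, callProviderMetadata, ...rest } = part;
        return rest as AnyPart;
      }),
    };
  });
}
```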

## Changes

1. **New function**: `clearProviderMetadataForOpenAI()` - operates on
`ModelMessage[]`
2. **Fixed**: `splitMixedContentMessages()` now treats reasoning parts
as text parts (they stay together)
3. **Updated**: Tests to reflect that reasoning parts are preserved, not
stripped
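A test along these lines (vitest-style, assumed; import path illustrative) captures the new expectation that reasoning survives while metadata is dropped:

```typescript
import { describe, expect, it } from "vitest";

// Assumes the clearProviderMetadataForOpenAI sketch above.
import { clearProviderMetadataForOpenAI } from "./modelMessageTransform";

describe("clearProviderMetadataForOpenAI", () => {
  it("preserves reasoning parts but clears stale metadata", () => {
    const input = [
      {
        role: "assistant",
        content: [
          {
            type: "reasoning",
            text: "thinking...",
            providerMetadata: { openai: { itemId: "rs_123" } },
          },
        ],
      },
    ];
    const [cleaned] = clearProviderMetadataForOpenAI(input);
    const parts = cleaned.content as Array<Record<string, unknown>>;
    expect(parts[0].type).toBe("reasoning"); // reasoning kept
    expect(parts[0].providerMetadata).toBeUndefined(); // metadata gone
  });
});
```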

## References

- Vercel AI SDK Issue: vercel/ai#7099
- User solution:
https://github.com/gvkhna/vibescraper/blob/f476c768266385affec3b5972790ef7b111da366/packages/website/src/assistant-ai/assistant-prepare-context.ts#L104

## Testing

- ✅ All message transform tests passing
- ✅ Reasoning parts preserved in both OpenAI and Anthropic flows
- ✅ Tool calls work correctly with reasoning
- ✅ Formatting checks pass
ammario added a commit that referenced this pull request Oct 7, 2025
This PR fixes the intermittent OpenAI API error using Vercel AI SDK's
middleware pattern to intercept and transform messages before
transmission.

## Problem
OpenAI's Responses API intermittently returns this error during
streaming:
```
Item 'rs_*' of type 'reasoning' was provided without its required following item
```

The error occurs during **multi-step tool execution** when:
- Model generates reasoning + tool calls
- SDK automatically executes tools and prepares next step
- Tool-call parts contain OpenAI item IDs that reference reasoning items
- When reasoning is stripped but tool-call IDs remain, OpenAI rejects
the malformed input

## Root Cause
OpenAI's Responses API uses internal item IDs (stored in
`providerOptions.openai.itemId`) to link:
- Reasoning items (`rs_*`)
- Function call items (`fc_*`)

When the SDK reconstructs conversation history for multi-step execution:
1. Assistant message includes `[reasoning, tool-call]` parts
2. Tool-call has `providerOptions.openai.itemId: "fc_*"` referencing
`rs_*`
3. Previous middleware stripped reasoning but left tool-call with
dangling reference
4. OpenAI API rejects: "function_call fc_* was provided without its
required reasoning item rs_*"

## Solution
Enhanced **OpenAI reasoning middleware** to strip item IDs when removing
reasoning:

**File: `src/utils/ai/openaiReasoningMiddleware.ts`**
1. Detects assistant messages containing reasoning parts
2. Filters out reasoning parts (OpenAI manages via `previousResponseId`)
3. **NEW:** Strips `providerOptions.openai` from remaining parts to
remove item IDs
4. Prevents dangling references that cause API errors

**Applied in: `src/services/aiService.ts`**
- Wraps OpenAI models with `wrapLanguageModel({ model, middleware })`
- Middleware intercepts messages before API transmission
- Only affects OpenAI (not Anthropic or other providers)
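Condensed, the middleware plus the wrapping could look like the sketch below. The real file is ~112 lines; typing is kept loose here (in AI SDK v5 the middleware would be a LanguageModelV2Middleware), and the model id is a placeholder:

```typescript
import { openai } from "@ai-sdk/openai";
import { wrapLanguageModel } from "ai";

// Condensed sketch of openaiReasoningMiddleware.ts.
const openaiReasoningMiddleware = {
  transformParams: async ({ params }: { params: any }) => {
    const prompt = params.prompt.map((msg: any) => {
      if (msg.role !== "assistant" || !Array.isArray(msg.content)) return msg;
      return {
        ...msg,
        content: msg.content
          // 1. Drop reasoning parts: OpenAI replays them via previousResponseId.
          .filter((part: any) => part.type !== "reasoning")
          // 2. Strip providerOptions.openai so remaining parts carry no
          //    dangling fc_* / rs_* item IDs.
          .map((part: any) => {
            const { openai: _dropped, ...rest } = part.providerOptions ?? {};
            return { ...part, providerOptions: rest };
          }),
      };
    });
    return { ...params, prompt };
  },
};

// aiService.ts: only OpenAI models get wrapped; Anthropic is untouched.
const model = wrapLanguageModel({
  model: openai("o3"), // any OpenAI reasoning model
  middleware: openaiReasoningMiddleware as any, // loose typing for the sketch
});
```

Because transformParams runs on every request, the cleanup also covers the intermediate inputs the SDK builds during automatic multi-step tool execution, not just the user's initial message.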

## Testing Results
Tested against real chat history that reliably reproduced the error:

✅ **Turn 1: PASSED** - Previously failed 100% of the time, now succeeds
✅ **Turn 2: PASSED** - Multi-step tool execution works correctly  

The middleware successfully:
- Stripped 15 OpenAI item IDs from tool-call parts (Turn 1)
- Stripped 15 OpenAI item IDs from tool-call parts (Turn 2)  
- Allowed multi-step tool execution without reasoning errors

## Technical Details
**Multi-step execution flow:**
1. User sends message
2. Model generates reasoning + tool calls (Step 1)
3. SDK auto-executes tools
4. SDK prepares Step 2 input: `[system, user,
assistant(reasoning+tools), tool-results]`
5. Middleware strips reasoning + item IDs before sending
6. Step 2 proceeds without API errors

**Why this fixes it:**
- OpenAI Responses API validates item ID references on input
- Removing `providerOptions.openai.itemId` prevents validation errors
- OpenAI tracks context via `previousResponseId`, not message content
- SDK's automatic tool execution works correctly with cleaned messages

## Files Changed
- `src/services/aiService.ts`: Apply middleware to OpenAI models (7
lines)
- `src/utils/ai/openaiReasoningMiddleware.ts`: New middleware with item
ID stripping (112 lines)

## Related Issues
- Fixes OpenAI reasoning errors from vercel/ai SDK issues #7099, #8031,
#8977
- Supersedes previous approaches (PR #61, #68) that didn't use SDK
middleware

_Generated with `cmux`_